Model calibration, which concerns how well a model's predicted confidence matches its empirical accuracy, not only plays a vital part in statistical model design but also has substantial practical applications, such as optimal decision-making in the real world. However, modern deep neural networks have been found to be generally poorly calibrated, overestimating (or underestimating) their predictive confidence, which is closely related to overfitting. In this paper, we propose Annealing Double-Head, a simple-to-implement yet highly effective architecture for calibrating a DNN during training. Specifically, we construct an additional calibration head, a shallow neural network that typically has one latent layer, on top of the last latent layer of the base model to map the logits to aligned confidences. Furthermore, we develop a simple annealing technique that dynamically scales the logits produced by the calibration head during training to improve its performance. We exhaustively evaluate the Annealing Double-Head architecture on multiple pairs of contemporary DNN architectures and vision and speech datasets, under both in-distribution and distribution-shift conditions. We demonstrate that our method achieves state-of-the-art calibration performance without post-processing while providing predictive accuracy comparable to other recently proposed calibration methods across a range of learning tasks.
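For concreteness, the following is a minimal PyTorch-style sketch of what a one-latent-layer calibration head combined with annealed logit scaling could look like. The module layout, the Softplus output, and the linear annealing schedule are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class DoubleHeadModel(nn.Module):
    """Backbone with a standard classification head plus a shallow
    calibration head on the last latent layer (sketch, not the paper's
    exact architecture)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int,
                 calib_hidden: int = 64):
        super().__init__()
        self.backbone = backbone                  # maps inputs to features
        self.cls_head = nn.Linear(feat_dim, num_classes)
        # Calibration head: a single latent layer producing a positive,
        # per-sample scaling factor for the logits.
        self.calib_head = nn.Sequential(
            nn.Linear(feat_dim, calib_hidden),
            nn.ReLU(),
            nn.Linear(calib_hidden, 1),
            nn.Softplus(),
        )

    def forward(self, x: torch.Tensor, step: int, total_steps: int):
        feats = self.backbone(x)
        logits = self.cls_head(feats)
        scale = self.calib_head(feats)            # per-sample temperature
        # Annealing (assumed schedule): gradually hand control of the
        # logit scale to the calibration head as training progresses.
        alpha = min(1.0, step / max(1, total_steps))
        return logits / (alpha * scale + (1.0 - alpha))
```

Under this schedule the head has no effect at the start of training (alpha = 0) and fully temperature-scales the logits by the end (alpha = 1).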
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
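As an illustration of the patch-based training strategy most respondents reported, here is a minimal NumPy sketch for sampling random 3D patches from a volume that is too large to process at once; the patch size and count are arbitrary placeholders.

```python
import numpy as np

def random_patches(volume: np.ndarray, patch_size=(64, 64, 64),
                   n_patches=8, seed=0):
    """Sample random 3D patches from a large volume (illustrative sizes)."""
    rng = np.random.default_rng(seed)
    # One random start coordinate per axis and per patch.
    starts = [rng.integers(0, max(1, s - p + 1), size=n_patches)
              for s, p in zip(volume.shape, patch_size)]
    return [volume[x:x + patch_size[0],
                   y:y + patch_size[1],
                   z:z + patch_size[2]]
            for x, y, z in zip(*starts)]
```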
Agriculture is at the heart of the solution to feeding the world population sustainably, but advancing our understanding of how agricultural output responds to climatic variability is still needed. Precision Agriculture (PA), a management strategy that uses technology such as remote sensing, Geographical Information Systems (GIS), and machine learning for decision-making in the field, has emerged as a promising approach to enhance crop production, increase yield, and reduce water and nutrient losses as well as environmental impacts. In this context, multiple models have been developed to predict agricultural phenotypes, such as crop yield, from genomics (G), environment (E; weather and soil), and field management practices (M). These models have traditionally been based on mechanistic or statistical approaches. However, AI approaches, which are intrinsically well suited to modeling complex interactions, have more recently been developed and outperform classical methods. Here, we present a Natural Language Processing (NLP)-based neural network architecture to process the G, E, and M inputs and their interactions. We show that, by modeling DNA as natural language, our approach performs better than previous approaches when tested on new environments, and similarly to other approaches on unseen seed varieties.
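To make the "DNA as natural language" idea concrete, a common way to feed genomic sequences into NLP models is overlapping k-mer tokenization, sketched below; the choice of k is an arbitrary assumption and may differ from the paper's actual tokenization.

```python
def kmer_tokenize(seq: str, k: int = 6) -> list[str]:
    """Turn a DNA sequence into overlapping k-mer 'words' so that standard
    NLP tooling (embeddings, transformers) can consume it."""
    seq = seq.upper()
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# kmer_tokenize("ACGTACGT", k=4) -> ['ACGT', 'CGTA', 'GTAC', 'TACG', 'ACGT']
```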
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
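Because the checkpoints are publicly released on the Hugging Face Hub, they can be loaded with the transformers library; a minimal usage sketch follows, using a small checkpoint from the BLOOM family since the full 176B model requires hundreds of gigabytes of memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"  # small sibling of the 176B model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Translate to French: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```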
To apply federated learning to drug discovery, we developed a novel platform in the context of the European Innovative Medicines Initiative (IMI) project MELLODDY (grant n°831472), which comprised 10 pharmaceutical companies, academic research labs, large industrial companies, and startups. The MELLODDY platform was the first industry-scale platform to enable the creation of a global federated model for drug discovery without sharing the confidential data sets of the individual partners. The federated model was trained on the platform by aggregating the gradients of all contributing partners in a cryptographically secure way after each training iteration. The platform was deployed on an Amazon Web Services (AWS) multi-account architecture running Kubernetes clusters in private subnets. Organisationally, the roles of the different partners were codified as different rights and permissions on the platform and administered in a decentralized way. The MELLODDY platform generated new scientific discoveries, which are described in a companion paper.
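Schematically, one federated training iteration reduces to averaging the per-partner gradients; the sketch below shows only this plain average, since the cryptographic secure-aggregation protocol the platform actually uses is not reproduced here.

```python
import numpy as np

def federated_round(partner_gradients: list[np.ndarray]) -> np.ndarray:
    """Aggregate one iteration's gradients across partners (schematic).
    In MELLODDY this step runs under a secure-aggregation protocol, so
    raw per-partner gradients are never revealed to anyone."""
    return np.mean(partner_gradients, axis=0)
```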
Automated optic disc (OD) and optic cup (OC) segmentation in fundus images enables the efficient measurement of the vertical cup-to-disc ratio (VCDR), a biomarker commonly used in ophthalmology to determine the degree of glaucomatous optic neuropathy. This is usually solved with coarse-to-fine deep learning algorithms, in which a first stage approximates the OD and a second stage uses a crop of that region to predict the OD/OC masks. Although this approach is widely applied in the literature, no studies have analyzed its actual contribution to the results. In this paper, we present a comprehensive analysis of different coarse-to-fine designs using five public databases, both from a standard segmentation perspective and in terms of estimating the VCDR for glaucoma assessment. Our analysis shows that these algorithms do not necessarily outperform standard multi-class single-stage models, especially when the latter are learned from sufficiently large and diverse training sets. Furthermore, we observe that the coarse stage achieves better OD segmentation results than the fine one, and that providing OD supervision in the second stage is essential for ensuring accurate OC masks. In addition, both single- and two-stage models trained in a multi-dataset setting show results on par with, or even better than, other state-of-the-art alternatives, while ranking first in OD/OC segmentation. Finally, we evaluate the models' VCDR predictions against six ophthalmologists on a subset of the AIROGS images to understand them in the context of inter-observer variability. We note that even the VCDR estimates recovered from single-stage and coarse-to-fine models can achieve good glaucoma detection results, even though they are not highly correlated with the experts' manual measurements.
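For reference, the VCDR can be computed from binary OD and OC masks as the ratio of their vertical extents; the sketch below assumes this common definition, although implementation details vary across studies.

```python
import numpy as np

def vertical_cup_to_disc_ratio(od_mask: np.ndarray, oc_mask: np.ndarray) -> float:
    """VCDR = vertical extent of the cup / vertical extent of the disc,
    computed from binary masks (common definition; details may vary)."""
    def vertical_extent(mask: np.ndarray) -> int:
        rows = np.where(mask.any(axis=1))[0]   # image rows containing the structure
        return int(rows.max() - rows.min() + 1) if rows.size else 0

    disc = vertical_extent(od_mask)
    return vertical_extent(oc_mask) / disc if disc else float("nan")
```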
Tumor segmentation is a fundamental step in radiotherapy treatment planning. To determine an accurate segmentation of the primary tumor (GTVp) of oropharyngeal cancer (OPC) patients, different image modalities need to be assessed simultaneously and each image volume has to be explored from different orientations. Moreover, the manually fixed boundary of a segmentation neglects the known spatial uncertainty in tumor delineation. This study proposes a novel automatic deep learning (DL) model to assist radiation oncologists in slice-by-slice adaptive GTVp segmentation on registered FDG PET/CT images. We included 138 OPC patients treated with (chemo)radiation at our institute. Our DL framework exploits both intra- and inter-slice context. Sequences of 3 consecutive slices of concatenated FDG PET/CT images and GTVp contours were used as input. 3-fold cross-validation was performed three times, training on sequences extracted from the axial (A), sagittal (S), and coronal (C) planes of 113 patients. Since consecutive sequences in a volume contain overlapping slices, each slice yielded three outcome predictions that were averaged. In the A, S, and C planes, the output shows regions with different probabilities of being tumor. Model performance was assessed on 25 patients using the mean Dice similarity coefficient (DSC). Predictions were closest to the ground truth at probability thresholds of 0.70 in the A, 0.77 in the S, and 0.80 in the C plane. The promising results of the proposed DL model indicate that probability maps on registered FDG PET/CT images could guide radiation oncologists in slice-by-slice adaptive GTVp segmentation.
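The averaging of overlapping slice predictions can be sketched as follows; the predict_seq interface (a stack of 3 consecutive slices in, per-slice tumor probabilities out) is an assumption made for illustration.

```python
import numpy as np

def slicewise_probability_map(volume: np.ndarray, predict_seq) -> np.ndarray:
    """Average overlapping predictions over all windows of 3 consecutive
    slices; each interior slice appears in three windows, so it receives
    three predictions that are averaged (sketch)."""
    n = volume.shape[0]
    assert n >= 3, "need at least one full 3-slice window"
    acc = np.zeros_like(volume, dtype=float)
    cnt = np.zeros(n)
    for i in range(n - 2):                     # every 3-slice window
        acc[i:i + 3] += predict_seq(volume[i:i + 3])
        cnt[i:i + 3] += 1
    return acc / cnt[:, None, None]            # per-slice averaged probabilities
```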
Magnetic resonance imaging (MRI) is the central modality for stroke imaging. It is used upon patient admission to make treatment decisions, such as selecting patients for intravenous thrombolysis or endovascular therapy. MRI is subsequently used during the hospital stay to predict outcome by visualizing infarct core size and location, and it can further be used to characterize stroke etiology, e.g. to discriminate between (cardio-)embolic and non-embolic strokes. Computer-based automated medical image processing is increasingly entering the clinical routine. Previous iterations of the Ischemic Stroke Lesion Segmentation (ISLES) challenge have helped to generate benchmark methods for the segmentation of acute and sub-acute ischemic stroke lesions. Here we introduce an expert-annotated, multicenter MRI dataset for the segmentation of acute to subacute stroke lesions. The dataset comprises 400 multi-vendor MRI cases with high variability in stroke lesion size, quantity, and location. It is split into a training dataset of n=250 and a test dataset of n=150. All training data will be made publicly available. The test dataset will be used for model validation only and will not be released to the public. This dataset is the foundation of the ISLES 2022 challenge, whose goal is to identify algorithmic methods that enable the development and benchmarking of robust and accurate segmentation algorithms for ischemic stroke.
There is a need to develop affordable and reliable diagnostic tools that can help contain the spread of COVID-19. Machine learning (ML) algorithms have been proposed for designing decision-support systems to assess chest X-ray images, which have proven useful for detecting and evaluating disease progression. Many research articles have been published on this topic, making it difficult to identify the best approaches for future work. This paper presents a systematic review of ML applied to COVID-19 detection using chest X-ray images, aiming to offer researchers a baseline in terms of methods, architectures, databases, and current limitations.
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, but this can be improved with prompting.